A Smoothing Descent Method for Nonconvex TV^q-Models

Authors

  • Michael Hintermüller
  • Tao Wu
Abstract

A novel class of variational models with nonconvex ℓ^q-norm-type regularizations (0 < q < 1) is considered; such models typically outperform their convex counterparts in restoring sparse images. Because the objective function is nonconvex and non-Lipschitz, these models are challenging from both an analytical and a numerical point of view. In this work we propose a smoothing descent method with provable convergence properties. Numerical experiments are also reported to illustrate the effectiveness of our method.
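The abstract does not reproduce the authors' precise algorithm, but the smoothing idea it names can be sketched: replace the non-Lipschitz |t|^q by the smooth surrogate (t^2 + eps^2)^(q/2), descend on the smoothed objective, and drive eps toward zero. The Python sketch below applies this to a 1-D ℓ^q-regularized denoising model; the step-size rule, the halving schedule for eps, and all parameter values are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def smoothing_descent_lq(f, lam=0.1, q=0.5, eps0=1.0, eps_min=1e-4,
                         inner_iters=300):
    """Sketch of smoothing descent for a 1-D TV^q model:

        min_u  0.5*||u - f||^2 + lam * sum_i |(Du)_i|^q,   0 < q < 1,

    with (Du)_i = u[i+1] - u[i].  The non-Lipschitz |t|^q is replaced by
    (t^2 + eps^2)^(q/2); gradient descent runs on the smoothed objective
    while eps is driven toward zero.  Illustrative only.
    """
    u = f.astype(float).copy()
    eps = eps0
    while eps > eps_min:
        # Conservative step size from a crude Lipschitz bound:
        # 1 (data term) + ||D||^2 * lam * max|phi_eps''| <= 1 + 12*lam*q*eps^(q-2).
        step = 1.0 / (1.0 + 12.0 * lam * q * eps**(q - 2))
        for _ in range(inner_iters):
            du = np.diff(u)                              # forward differences Du
            w = q * du * (du**2 + eps**2)**(q / 2 - 1)   # derivative of smoothed |.|^q
            grad = u - f                                 # gradient of the data term
            grad[1:] += lam * w                          # adjoint of D applied to w
            grad[:-1] -= lam * w
            u -= step * grad
        eps *= 0.5                                       # tighten the smoothing
    return u

# Toy usage: restore a noisy piecewise-constant (sparse-gradient) signal.
rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, 0.0], 50)
noisy = truth + 0.1 * rng.standard_normal(truth.size)
restored = smoothing_descent_lq(noisy, lam=0.1, q=0.5)
```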

Similar resources

Fast Incremental Method for Nonconvex Optimization

We analyze a fast incremental aggregated gradient method for optimizing nonconvex problems of the form min_x ∑_i f_i(x). Specifically, we analyze the SAGA algorithm within an Incremental First-order Oracle framework, and show that it converges to a stationary point provably faster than both gradient descent and stochastic gradient descent. We also discuss Polyak's special class of nonconvex pro...
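SAGA itself is well documented, so a minimal sketch may help fix ideas: one gradient per component is cached, and each sampled gradient is corrected by the cached entry and the running mean. The least-squares toy problem and all constants below are hypothetical; the paper's contribution is the nonconvex analysis, not this code.

```python
import numpy as np

def saga(grad_i, n, x0, step=0.01, iters=20000, seed=0):
    """Minimal SAGA sketch for min_x (1/n) * sum_i f_i(x).

    One gradient per component is cached; each step corrects the freshly
    sampled gradient with the cached entry and the running mean, which
    removes variance near a stationary point.
    """
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    table = np.array([grad_i(x, i) for i in range(n)])  # cached gradients
    avg = table.mean(axis=0)
    for _ in range(iters):
        j = rng.integers(n)
        g_new = grad_i(x, j)
        x -= step * (g_new - table[j] + avg)            # variance-reduced update
        avg += (g_new - table[j]) / n                   # keep the mean current
        table[j] = g_new
    return x

# Hypothetical least-squares usage (convex here; the update is identical
# in the nonconvex setting the paper analyzes).
rng = np.random.default_rng(1)
A, b = rng.standard_normal((100, 5)), rng.standard_normal(100)
x_hat = saga(lambda x, i: (A[i] @ x - b[i]) * A[i], n=100, x0=np.zeros(5))
```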

Strong rules for nonconvex penalties and their implications for efficient algorithms in high-dimensional regression

We consider approaches for improving the efficiency of algorithms for fitting nonconvex penalized regression models such as SCAD and MCP in high dimensions. In particular, we develop rules for discarding variables during cyclic coordinate descent. This dimension reduction leads to a substantial improvement in the speed of these algorithms for high-dimensional problems. The rules we propose here...
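As a rough illustration of such screening, the sketch below pairs a lasso-style sequential strong rule with the standard coordinate-wise MCP (firm-thresholding) update; the exact rules derived in the paper differ, and everything here is a hypothetical simplification.

```python
import numpy as np

def mcp_cd_screened(X, y, lam, lam_prev, beta0=None, gamma=3.0, iters=100):
    """Cyclic coordinate descent for MCP-penalized least squares with a
    lasso-style sequential strong rule (hypothetical simplification).

    Assumes standardized columns (mean 0, ||x_j||^2 = n).  A coordinate is
    skipped for this lam if its residual correlation at the previous
    solution falls below 2*lam - lam_prev.
    """
    n, p = X.shape
    beta = np.zeros(p) if beta0 is None else beta0.astype(float).copy()
    r = y - X @ beta                                  # residual at the warm start
    keep = np.abs(X.T @ r) / n >= 2 * lam - lam_prev  # sequential strong rule
    soft = lambda z, t: np.sign(z) * max(abs(z) - t, 0.0)
    for _ in range(iters):
        for j in np.flatnonzero(keep):
            z = X[:, j] @ r / n + beta[j]             # univariate problem data
            # Closed-form MCP ("firm thresholding") update, gamma > 1.
            new = z if abs(z) > gamma * lam else soft(z, lam) / (1 - 1 / gamma)
            r -= X[:, j] * (new - beta[j])            # maintain the residual
            beta[j] = new
    # A real solver would now check the KKT conditions on screened-out
    # coordinates and re-solve if any are violated.
    return beta
```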

An Interior-Point Algorithm for Nonconvex Nonlinear Programming

The paper describes an interior-point algorithm for nonconvex nonlinear programming which is a direct extension of interior-point methods for linear and quadratic programming. Major modifications include a merit function and an altered search direction to ensure that a descent direction for the merit function is obtained. Preliminary numerical testing indicates that the method is robust. Furthe...
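The algorithm described there is Newton-based with a merit function; as a much simpler stand-in for the underlying barrier idea, the toy sketch below runs gradient descent with a feasibility-preserving backtracking line search on a log-barrier function while shrinking the barrier parameter. All signatures and step rules are assumptions for illustration.

```python
import numpy as np

def barrier_descent(f, grad_f, c, jac_c, x0, mu0=1.0, mu_min=1e-6, inner=200):
    """Toy log-barrier method for min f(x) s.t. c(x) >= 0 (componentwise).

    Gradient descent with backtracking on B_mu(x) = f(x) - mu*sum(log c(x)),
    shrinking mu between rounds.  Not the paper's primal-dual algorithm.
    """
    x = x0.astype(float).copy()                      # x0 must be strictly feasible
    B = lambda x, mu: f(x) - mu * np.sum(np.log(c(x)))
    mu = mu0
    while mu > mu_min:
        for _ in range(inner):
            g = grad_f(x) - mu * jac_c(x).T @ (1.0 / c(x))  # gradient of B_mu
            t = 1.0
            # Backtrack until the step stays strictly feasible and B decreases.
            while np.any(c(x - t * g) <= 0) or B(x - t * g, mu) > B(x, mu) - 0.5 * t * (g @ g):
                t *= 0.5
                if t < 1e-12:
                    break
            x -= t * g
        mu *= 0.1                                    # tighten the barrier
    return x

# E.g. min ||x||^2 subject to x_i >= 1, started from the feasible point 2:
x_opt = barrier_descent(lambda x: x @ x, lambda x: 2 * x,
                        lambda x: x - 1.0, lambda x: np.eye(x.size),
                        np.full(3, 2.0))
```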

Linear Convergence of Accelerated Stochastic Gradient Descent for Nonconvex Nonsmooth Optimization

In this paper, we study the stochastic gradient descent (SGD) method for nonconvex nonsmooth optimization, and propose an accelerated SGD method by combining the variance reduction technique with Nesterov's extrapolation technique. Moreover, based on the local error bound condition, we establish the linear convergence of our method to a stationary point of the nonconvex problem...
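A generic way to combine these two ingredients, not necessarily the paper's exact scheme, is SVRG-style variance reduction evaluated at a Nesterov extrapolation point, sketched below; grad_i, full_grad, and all constants are placeholders.

```python
import numpy as np

def svrg_nesterov(grad_i, full_grad, x0, n, step=0.05, momentum=0.9,
                  epochs=20, seed=0):
    """Generic sketch: SVRG variance reduction plus Nesterov extrapolation.

    grad_i(x, i) is the i-th component gradient, full_grad(x) the full one.
    """
    rng = np.random.default_rng(seed)
    x = x_prev = x0.astype(float).copy()
    for _ in range(epochs):
        snap, mu = x.copy(), full_grad(x)            # snapshot and full gradient
        for _ in range(n):
            y = x + momentum * (x - x_prev)          # extrapolation point
            j = rng.integers(n)
            v = grad_i(y, j) - grad_i(snap, j) + mu  # variance-reduced gradient
            x_prev, x = x, y - step * v
    return x
```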

Journal title:

Volume:   Issue:

Pages:  -

Publication date: 2012